Linux disk and RAID commands in detail


This article summarizes the commands used to inspect disk information on Linux and shows how to quickly find the disk-related details you are after.

nvme-cli

Q1: How do I check whether the NVMe write cache is enabled?

[root@node83 product]# nvme id-ctrl /dev/nvme0n1 -H | grep -i cache
  [0:0] : 0     Volatile Write Cache Not Present
[root@node83 product]# nvme get-feature -f 6 /dev/nvme0n1
NVMe Status:INVALID_FIELD: A reserved coded value or an unsupported value in a defined field(4002)
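
This drive reports that no volatile write cache is present, which is why reading feature ID 6 returns INVALID_FIELD. On a drive that does expose a volatile write cache, the same feature can be read and toggled. A minimal sketch; the device name is reused from the example above and the commands only succeed when the cache actually exists:

nvme get-feature /dev/nvme0n1 -f 0x06 -H   # read feature 0x06 (Volatile Write Cache), human-readable
nvme set-feature /dev/nvme0n1 -f 0x06 -v 1 # enable the volatile write cache (a value of 0 disables it)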

Q2: How do I list the serial number, model, device node, capacity and format (LBA size) of the NVMe drives in the system?

[root@node83 product]# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     PHLJ911104JB1P0FGN   INTEL SSDPE2KX010T8                      1           1.00 TB /   1.00 TB      512 B + 0 B      VDV10131
/dev/nvme1n1     PHLJ911001A01P0FGN   INTEL SSDPE2KX010T8                      1           1.00 TB /   1.00 TB      512 B + 0 B      VDV10131
/dev/nvme2n1     PHLJ911109NM1P0FGN   INTEL SSDPE2KX010T8                      1           1.00 TB /   1.00 TB      512 B + 0 B      VDV10131
/dev/nvme3n1     PHLJ911000SZ1P0FGN   INTEL SSDPE2KX010T8                      1           1.00 TB /   1.00 TB      512 B + 0 B      VDV10131

Command              Description
nvme connect         Connect to an NVMe-oF subsystem
nvme connect-all     Connect to all discovered NVMe-oF subsystems
nvme disconnect      Disconnect from an NVMe-oF subsystem
nvme disconnect-all  Disconnect from all NVMe-oF subsystems
nvme get-feature     Read an NVMe feature
nvme list            List all NVMe devices attached to the system: name, serial number, size and LBA format
nvme id-ctrl         Show the NVMe controller identity and the features it supports
nvme id-ns           Show an NVMe namespace, its optimizations and supported features
nvme format          Securely erase the data on the SSD and format the LBA size or protection information for end-to-end data protection
nvme sanitize        Securely erase all data on the SSD
nvme smart-log       Show the NVMe SMART log: health status, temperature, endurance and so on
nvme fw-log          Show the NVMe firmware log, printing the status of each entry
nvme error-log       Show the NVMe error log
nvme reset           Reset the NVMe controller
nvme help            Show help
nvme delete-ns       Delete a namespace from the specified device
nvme create-ns       Create a namespace on the specified device; creating a namespace smaller than the full capacity effectively over-provisions the SSD, which can improve endurance, performance and latency
nvme fw-download     Download a new firmware image to a device
nvme fw-commit       Commit (activate) the downloaded firmware so it takes effect
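
Since several of the entries above are NVMe-oF commands, here is a minimal sketch of connecting to a remote subsystem over TCP; the address, port and NQN are placeholders, not values taken from this article:

nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.org.example:subsys1   # attach a remote subsystem
nvme list                                                                      # its namespaces now show up as /dev/nvmeXnY
nvme disconnect -n nqn.2014-08.org.example:subsys1                             # detach it again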

lsblk

Q1: How do I tell whether a disk is an SSD or an HDD?

[root@node83 product]# lsblk -o name,rota
NAME        ROTA
sdf            1
nvme0n1        0
├─nvme0n1p5    0
├─nvme0n1p3    0
├─nvme0n1p1    0
├─nvme0n1p6    0
├─nvme0n1p4    0
└─nvme0n1p2    0
sdo            1
sdd            1
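
ROTA is 1 for rotational media (HDD) and 0 for non-rotational media (SSD/NVMe). The same flag can be read straight from sysfs, which is handy in scripts; a small sketch:

# Print HDD/SSD for every block device, based on the kernel's rotational flag
for dev in /sys/block/sd* /sys/block/nvme*n*; do
    [ -e "$dev/queue/rotational" ] || continue
    if [ "$(cat "$dev/queue/rotational")" = "1" ]; then kind=HDD; else kind=SSD; fi
    echo "$(basename "$dev"): $kind"
done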

Q2: How do I show a disk's name, rotational flag, type, serial number, optimal I/O size, minimum I/O size, mount point and capacity?

[root@node83 product]# lsblk -o name,rota,type,serial,OPT-IO,MIN-IO,MOUNTPOINT,SIZE
NAME        ROTA TYPE SERIAL             OPT-IO MIN-IO MOUNTPOINT                 SIZE
sdf            1 disk 50000399c84b3f0d        0   4096                            1.7T
nvme0n1        0 disk PHLJ911104JB1P0FGN      0    512                          931.5G
├─nvme0n1p5    0 part                         0    512 /var/lib/ceph/osd/ceph-4   100M
├─nvme0n1p3    0 part                         0    512 /var/lib/ceph/osd/ceph-3   100M
├─nvme0n1p1    0 part                         0    512 /var/lib/ceph/osd/ceph-0   100M
├─nvme0n1p6    0 part                         0    512                             64G
├─nvme0n1p4    0 part                         0    512                             64G
└─nvme0n1p2    0 part                         0    512                            803G
sdo            1 disk 50000399c84b6531        0   4096                            1.7T
sdd            1 disk 50000399c84b3ba5        0   4096                            1.7T

Q3: How do I list SCSI devices?

[root@node83 product]# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL            REV  TRAN
sdf  0:2:9:0    disk TOSHIBA  AL15SEB18EQ      1403
sdo  0:2:18:0   disk TOSHIBA  AL15SEB18EQ      1403
sdd  0:2:7:0    disk TOSHIBA  AL15SEB18EQ      1403
sdm  0:2:16:0   disk TOSHIBA  AL15SEB18EQ      1403
sdb  0:2:5:0    disk TOSHIBA  AL15SEB18EQ      1403
sdk  0:2:14:0   disk TOSHIBA  AL15SEB18EQ      1403
sdi  0:2:12:0   disk TOSHIBA  AL15SEB18EQ      1403
sdg  0:2:10:0   disk TOSHIBA  AL15SEB18EQ      1403
sde  0:2:8:0    disk TOSHIBA  AL15SEB18EQ      1403
sdn  0:2:17:0   disk TOSHIBA  AL15SEB18EQ      1403
sdc  0:2:6:0    disk TOSHIBA  AL15SEB18EQ      1403
sdl  0:2:15:0   disk TOSHIBA  AL15SEB18EQ      1403
sda  0:2:4:0    disk TOSHIBA  AL15SEB18EQ      1403
sdj  0:2:13:0   disk TOSHIBA  AL15SEB18EQ      1403
sdh  0:2:11:0   disk TOSHIBA  AL15SEB18EQ      1403
sdp  0:2:19:0   disk TOSHIBA  AL15SEB18EQ      1403

Q4: How do I show the owner, group and permission mode of each disk?

[root@node83 product]# lsblk -m
NAME         SIZE OWNER GROUP MODE
sdf          1.7T root  disk  brw-rw----
nvme0n1    931.5G root  disk  brw-rw----
├─nvme0n1p5  100M root  disk  brw-rw----
├─nvme0n1p3  100M root  disk  brw-rw----
├─nvme0n1p1  100M ceph  ceph  brw-rw----
├─nvme0n1p6   64G ceph  ceph  brw-rw----
├─nvme0n1p4   64G ceph  ceph  brw-rw----
└─nvme0n1p2  803G ceph  ceph  brw-rw----
sdo          1.7T root  disk  brw-rw----
sdd          1.7T root  disk  brw-rw----

lsblk -o accepts many column names, so you can print exactly the columns you want; run lsblk -h (or lsblk --help) for the full list.
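
For scripting it is often convenient to combine a custom column list with machine-readable output. A small sketch; the column choice is just an example, and -J (JSON output) requires a reasonably recent util-linux:

lsblk -d -o NAME,SERIAL,SIZE,ROTA,TRAN   # -d: whole disks only, no partitions
lsblk -J -o NAME,SIZE,TYPE,MOUNTPOINT    # -J: JSON output, easy to parse with jq
lsblk -b -d -o NAME,SIZE                 # -b: sizes in bytes instead of human-readable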

smartctl

Q1: View disk information

smartctl -i /dev/sda   # identity information; also shows whether SMART is available and enabled
smartctl -H /dev/sda   # overall health self-assessment
smartctl -a /dev/sda   # all SMART information for the disk
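
To run the same health check across every disk in the box, a short loop works. A minimal sketch; the /dev/sd? glob is an assumption about the device naming on this host:

for d in /dev/sd?; do
    echo "== $d =="
    smartctl -H "$d" | grep -i health
done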

Q2: Disk settings

smartctl -x /dev/sda                                          # extended information, including the Write cache state
smartctl -s wcache,on /dev/sda                                # enable the drive's write cache
smartctl --smart=on --offlineauto=on --saveauto=on /dev/sda   # enable SMART, automatic offline testing and attribute autosave
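
smartctl can also trigger the drive's built-in self-tests and read back the results; a short sketch:

smartctl -t short /dev/sda      # start a short self-test (runs in the background on the drive)
smartctl -l selftest /dev/sda   # show the self-test log once it has finished
smartctl -l error /dev/sda      # show the drive's error log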

Q3: How do I see how many data units a drive has written and read?

[root@node77 SDS_Admin]# smartctl --all /dev/nvme0n1 | grep Written
Data Units Written:                 1,132,776,316 [579 TB]
[root@node77 SDS_Admin]# smartctl --all /dev/nvme0n1 | grep Read
Data Units Read:                    817,942,826 [418 TB]

"Data Units Written" reports how much data the host has written to the SSD. One unit corresponds to 1,000 LBAs, and an LBA is typically 512 bytes, so bytes written = units × 1000 × 512.
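
A quick sanity check of the TB figure smartctl prints, using the counter from the output above and assuming 512-byte LBAs:

units=1132776316
echo "$(( units * 1000 * 512 / 1000000000000 )) TB written"   # -> 579 TB, matching smartctl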

Q4: How do I calculate a drive's write amplification (WA)?

[root@node77 product]# nvme intel smart-log-add /dev/nvme1n1
Additional Smart Log for NVME device:nvme1n1 namespace-id:ffffffff
key                               normalized raw
program_fail_count              : 100%       1
erase_fail_count                : 100%       0
wear_leveling                   :  71%       min: 1461, max: 1481, avg: 1461
end_to_end_error_detection_count: 100%       0
crc_error_count                 : 100%       0
timed_workload_media_wear       : 100%       63.999%
timed_workload_host_reads       : 100%       65535%
timed_workload_timer            : 100%       65535 min
thermal_throttle_status         : 100%       0%, cnt: 0
retry_buffer_overflow_count     : 100%       0
pll_lock_loss_count             : 100%       0
nand_bytes_written              : 100%       sectors: 58794603
host_bytes_written              : 100%       sectors: 36254579

WA = nand_bytes_written / host_bytes_written (both counters are reported in the same unit, so the ratio can be used directly)
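
Plugging in the sector counters from the smart-log output above:

awk 'BEGIN { printf "WA = %.2f\n", 58794603 / 36254579 }'   # -> WA = 1.62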

df

df -h        # human-readable sizes
df -T        # show the filesystem type
df -l        # show only local filesystems
df -t ext3   # show only ext3 filesystems
df -a        # show all filesystems, including pseudo and duplicate entries
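
GNU df can also print a custom set of columns, similar in spirit to lsblk -o. A small sketch; it requires a coreutils df that supports --output, and the path is just an example:

df -h --output=source,fstype,size,used,avail,pcent,target /var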

fdisk

fdisk is mainly used to create, delete and manage disk partitions.
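
fdisk is normally driven through the interactive menu shown below, but for provisioning scripts the answers can also be piped in. A minimal, destructive sketch that creates one primary partition spanning a disk with an MBR/DOS label; /dev/sdX is a placeholder and the write step will overwrite the existing partition table:

printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdX   # n=new, p=primary, 1=partition number, defaults for first/last sector, w=write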

[root@node86 ssh]# fdisk /dev/sda        # enter the partitioning interface
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): m
Command action
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):

The most commonly used actions are: l lists known partition types, p prints the partition table, d deletes a partition, n adds a new partition, w writes the table to disk, and q quits without saving.

lsscsi

[root@node86 ssh]# lsscsi
[0:0:45:0]   enclosu MSCC     SXP 48x12G        RevB  -
[0:2:0:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sda
[0:2:2:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdb
[0:2:4:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdc

The columns are: [host:channel:id:lun], device type, vendor, model, revision, and device node.

[root@node86 ssh]# lsscsi -v        # show details, including the sysfs path
[0:0:45:0]   enclosu MSCC     SXP 48x12G        RevB  -
  dir: /sys/bus/scsi/devices/0:0:45:0  [/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/host0/target0:0:45/0:0:45:0]
[0:2:0:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sda
  dir: /sys/bus/scsi/devices/0:2:0:0  [/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/host0/target0:2:0/0:2:0:0]
[0:2:2:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdb
  dir: /sys/bus/scsi/devices/0:2:2:0  [/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/host0/target0:2:2/0:2:2:0]
[0:2:4:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdc
  dir: /sys/bus/scsi/devices/0:2:4:0  [/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/host0/target0:2:4/0:2:4:0]

[root@node86 ssh]# lsscsi -c        # same information as cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Target: 45 Lun: 00
  Vendor: MSCC     Model: SXP 48x12G       Rev: RevB
  Type:   Enclosure                        ANSI SCSI revision: 06
Host: scsi0 Channel: 02 Target: 00 Lun: 00
  Vendor: AVAGO    Model: MR9361-8i        Rev: 4.68
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi0 Channel: 02 Target: 02 Lun: 00
  Vendor: AVAGO    Model: MR9361-8i        Rev: 4.68
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi0 Channel: 02 Target: 04 Lun: 00
  Vendor: AVAGO    Model: MR9361-8i        Rev: 4.68
  Type:   Direct-Access                    ANSI SCSI revision: 05

[root@node86 ssh]# lsscsi -s        # also show capacity
[0:0:45:0]   enclosu MSCC     SXP 48x12G        RevB  -          -
[0:2:0:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sda   1.79TB
[0:2:2:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdb   1.79TB
[0:2:4:0]    disk    AVAGO    MR9361-8i         4.68  /dev/sdc   1.79TB

blkid

blkid displays block device attributes such as UUID, filesystem type and partition labels.

[root@node86 ssh]# blkid
/dev/sda1: SEC_TYPE="msdos" UUID="489E-4A6D" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="50d21d19-c660-458e-90cb-d84be07eb8e1"
/dev/sda2: UUID="32fb3f32-9f57-4789-84c0-ec7f74ce9e53" TYPE="xfs" PARTUUID="a9f5c392-d660-4932-befd-94eca1385594"
/dev/sda3: UUID="adbdcdd5-0aa5-41c6-8d7e-03b2852e9f67" TYPE="swap" PARTUUID="c53ddafd-475a-49da-8221-20d399d3bff7"
/dev/sda5: UUID="4c43166f-3456-45bf-a279-bd07632164ea" TYPE="xfs" PARTUUID="77e82a83-63fe-4858-98ad-ff09ab99c843"
/dev/sda6: UUID="9199e993-7661-497a-bc60-5c3dd6d7afbb" TYPE="xfs" PARTUUID="a5993e4e-d096-491b-8af6-feff191c610f"
/dev/sdb1: SEC_TYPE="msdos" UUID="8FC4-55B3" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="7203c214-c04f-4caf-af47-730a961119d0"
/dev/sdb2: UUID="b8abda60-5823-4dbf-a369-0ca078266019" TYPE="xfs" PARTUUID="4f1cb80d-3dd5-4798-a153-1c51951360e0"
/dev/nvme0n1: PTTYPE="gpt"
/dev/nvme3n1: PTTYPE="gpt"
/dev/nvme2n1: PTTYPE="gpt"
/dev/nvme1n1: PTTYPE="gpt"
/dev/sda4: PARTUUID="bebdddd3-c1d3-47b5-b823-f210af866d93"
/dev/sdc: PTTYPE="gpt"

blkid -k   # list all known filesystem and RAID types
blkid -u   # restrict probing by usage (filesystem, raid, crypto, ...); takes a comma-separated list
blkid -n   # restrict probing by superblock type; takes a comma-separated list
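
blkid is also the easy way to grab a single attribute for use in scripts or /etc/fstab; a short sketch using the device from the output above:

blkid -s UUID -o value /dev/sda2   # print only the UUID of /dev/sda2
blkid -t TYPE=xfs                  # list only devices whose filesystem type is xfs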

lshw

lshw reports detailed information about the machine's hardware.
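
Besides the -class disk and -class storage reports shown below, lshw can emit condensed or machine-readable output; a small sketch:

lshw -short -class disk      # one line per disk instead of the full record
lshw -json -class storage    # JSON output for parsing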

[root@node86 ssh]# lshw -class disk        # disk information
  *-disk:0
       description: SCSI Disk
       product: MR9361-8i
       vendor: AVAGO
       physical id: 2.0.0
       bus info: scsi@0:2.0.0
       logical name: /dev/sda
       version: 4.68
       serial: 00f03910f82c94fb29c092b604b26200
       size: 1676GiB (1799GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=b487af19-3200-422f-9a46-6b22a472d656 logicalsectorsize=512 sectorsize=4096
  *-disk:1
       description: SCSI Disk
       product: MR9361-8i
       vendor: AVAGO
       physical id: 2.2.0
       bus info: scsi@0:2.2.0
       logical name: /dev/sdb
       version: 4.68
       serial: 00cd08cc7885538128d059100fb00506
       size: 1676GiB (1799GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=f90fd116-e809-4c06-b9b0-862ecbd3aa8b logicalsectorsize=512 sectorsize=4096
  *-disk:2
       description: SCSI Disk
       product: MR9361-8i
       vendor: AVAGO
       physical id: 2.4.0
       bus info: scsi@0:2.4.0
       logical name: /dev/sdc
       version: 4.68
       serial: 005e8c7b7990538128d059100fb00506
       size: 1676GiB (1799GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=02c1a9e7-60c7-4dd9-9d8d-3655fed8ff97 logicalsectorsize=512 sectorsize=4096

# Shows SATA controllers, RAID controllers, directly attached NVMe controllers, and so on
[root@node86 ssh]# lshw -class storage     # storage controller information
  *-sata:0
       description: SATA controller
       product: C620 Series Chipset Family SSATA Controller [AHCI mode]
       vendor: Intel Corporation
       physical id: 11.5
       bus info: pci@0000:00:11.5
       version: 09
       width: 32 bits
       clock: 66MHz
       capabilities: sata msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:342 memory:9d206000-9d207fff memory:9d209000-9d2090ff ioport:3070(size=8) ioport:3060(size=4) ioport:3020(size=32) memory:9d180000-9d1fffff
  *-sata:1
       description: SATA controller
       product: C620 Series Chipset Family SATA Controller [AHCI mode]
       vendor: Intel Corporation
       physical id: 17
       bus info: pci@0000:00:17.0
       version: 09
       width: 32 bits
       clock: 66MHz
       capabilities: sata msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:695 memory:9d204000-9d205fff memory:9d208000-9d2080ff ioport:3050(size=8) ioport:3040(size=4) ioport:3000(size=32) memory:9d100000-9d17ffff
  *-raid
       description: RAID bus controller
       product: MegaRAID SAS-3 3108 [Invader]
       vendor: Broadcom / LSI
       physical id: 0
       bus info: pci@0000:18:00.0
       logical name: scsi0
       version: 02
       width: 64 bits
       clock: 33MHz
       capabilities: raid pm pciexpress vpd msi msix bus_master cap_list rom
       configuration: driver=megaraid_sas latency=0
       resources: irq:252 ioport:5000(size=256) memory:aa200000-aa20ffff memory:aa100000-aa1fffff memory:aa000000-aa0fffff
  *-raid:0
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: b
       bus info: pci@0000:17:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:381f0-381ef iomemory:381f0-381ef irq:0 memory:381ff8000000-381ff9ffffff memory:a8000000-a9ffffff memory:381ffff00000-381fffffffff
  *-raid:1
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: 53
       bus info: pci@0000:3a:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:382f0-382ef iomemory:382f0-382ef irq:0 memory:382ffc000000-382ffdffffff memory:b6000000-b7ffffff memory:382ffff00000-382fffffffff
  *-raid:2
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: 79
       bus info: pci@0000:5d:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:383f0-383ef iomemory:383f0-383ef irq:0 memory:383ffc000000-383ffdffffff memory:c0000000-c1ffffff memory:383ffff00000-383fffffffff
  *-raid:3
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: 96
       bus info: pci@0000:85:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:385f0-385ef iomemory:385f0-385ef irq:0 memory:385ffc000000-385ffdffffff memory:dc000000-ddffffff memory:385ffff00000-385fffffffff
  *-nvme
       description: Non-Volatile memory controller
       product: NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:af:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: nvme pm msix pciexpress msi nvm_express bus_master cap_list rom
       configuration: driver=nvme latency=0
       resources: irq:334 memory:ee310000-ee313fff memory:ee300000-ee30ffff
  *-nvme
       description: Non-Volatile memory controller
       product: NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:b0:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: nvme pm msix pciexpress msi nvm_express bus_master cap_list rom
       configuration: driver=nvme latency=0
       resources: irq:335 memory:ee210000-ee213fff memory:ee200000-ee20ffff
  *-nvme
       description: Non-Volatile memory controller
       product: NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:b1:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: nvme pm msix pciexpress msi nvm_express bus_master cap_list rom
       configuration: driver=nvme latency=0
       resources: irq:338 memory:ee110000-ee113fff memory:ee100000-ee10ffff
  *-nvme
       description: Non-Volatile memory controller
       product: NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:b2:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: nvme pm msix pciexpress msi nvm_express bus_master cap_list rom
       configuration: driver=nvme latency=0
       resources: irq:340 memory:ee010000-ee013fff memory:ee000000-ee00ffff
  *-raid:4
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: db
       bus info: pci@0000:ae:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:386f0-386ef iomemory:386f0-386ef irq:0 memory:386ffc000000-386ffdffffff memory:ec000000-edffffff memory:386ffff00000-386fffffffff
  *-raid:5
       description: RAID bus controller
       product: Volume Management Device NVMe RAID Controller
       vendor: Intel Corporation
       physical id: f5
       bus info: pci@0000:d7:05.5
       version: 07
       width: 64 bits
       clock: 33MHz
       capabilities: raid msix pm bus_master cap_list
       configuration: driver=vmd latency=0
       resources: iomemory:387f0-387ef iomemory:387f0-387ef irq:0 memory:387ffc000000-387ffdffffff memory:f8000000-f9ffffff memory:387ffff00000-387fffffffff

exportfs

exportfs is part of the nfs-utils package; it is used to export and unexport directories shared over NFS, i.e. to maintain the server's export table.

exportfs -v      # list the directories that are currently exported
exportfs -arv    # re-export everything from /etc/exports without restarting the NFS service (the common case)

  -a   export (or, with -u, unexport) all directories
  -r   re-export all directories
  -u   unexport a directory
  -v   verbose output; show the exported directories
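
exportfs reads its configuration from /etc/exports. A minimal sketch of an export entry (the path and subnet are placeholders), followed by the reload shown above:

# /etc/exports
/data/nfs  192.168.1.0/24(rw,sync,no_root_squash)

exportfs -arv    # apply the new entry without restarting NFS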

mount

mount attaches filesystems to the directory tree; the source can be a whole disk, a partition, a RAM-backed filesystem, or a disk image stored in a regular file.

mount /dev/sda1 /mnt/nfs                                             # mount a partition at a mount point
mount vdisk.img /mnt                                                 # mount a disk image file via a loop device (add -o loop on older systems)
mount -t tmpfs -o size=512m tmpfs /mnt                               # create a 512 MB RAM-backed tmpfs and mount it
mount -t nfs -o vers=3 197.16.102.234:/NAS/CAPFS/data/nfs /mnt/nfs   # mount an NFS export
mount -t tmpfs                                                       # list the currently mounted tmpfs filesystems
mount -o ro /dev/sda1 /mnt                                           # mount read-only
mount /mnt -o rw,remount                                             # remount an existing mount read-write
mount -t proc none /mnt                                              # mount the kernel's proc pseudo-filesystem at /mnt; "none" can be any string
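
To make a mount persistent across reboots it goes into /etc/fstab, ideally referenced by the UUID that blkid reports. A minimal sketch; the UUID is taken from the blkid output earlier and the /data mount point is a placeholder:

# /etc/fstab
UUID=4c43166f-3456-45bf-a279-bd07632164ea  /data  xfs  defaults  0 0

mount -a    # mount everything listed in /etc/fstab that is not yet mounted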

storcli

storcli is used to query and configure Broadcom/LSI MegaRAID controllers.

# RAID controller name
/usr/local/DevManage/example/get_or_set/devshow_proc_demo -G -RAID_NAME
# RAID controller details (cache, bus bandwidth, PCI address, bus type, etc.)
/usr/local/DevManage/example/get_or_set/devshow_proc_demo -G -RAID_INFO RaidCtrl0
# RAID overview (controller model, number of virtual and physical drives, etc.)
/opt/MegaRAID/storcli/storcli64 show
# RAID mode (personality)
/opt/MegaRAID/storcli/storcli64 /c0 show personality
# Check whether the controller is in JBOD mode
storcli /c0 show jbod
# Driver, firmware and BIOS versions of the RAID card
storcli /c0 show all | grep -i version
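
A few more storcli queries that help map OS disks back to physical slots; a sketch assuming controller /c0:

storcli /c0 show all             # full controller report
storcli /c0 /eall /sall show     # every physical drive in every enclosure, with slot, state and capacity
storcli /c0 /vall show           # all virtual (RAID) drives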

megacli queries information about the drives behind a MegaRAID controller.

megacli -PDList -aALL    # list every physical drive behind the RAID controller: serial number, slot number, etc.
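
Two more MegaCli queries in the same vein; a sketch, noting that the binary may be installed as MegaCli, MegaCli64 or megacli depending on the package:

megacli -LDInfo -Lall -aALL     # information about every logical (RAID) drive on every adapter
megacli -AdpAllInfo -aALL       # full adapter information: firmware, BIOS, memory, supported RAID levels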


CopyRight 2018-2019 办公设备维修网 版权所有 豫ICP备15022753号-3